During software development, developers need to answer queries about the semantics of code. Although natural language question answering has been studied extensively with neural approaches, the problem of answering semantic queries over code with neural networks remains unexplored, mainly because there is no existing dataset with extractive question-and-answer pairs over code that involve complex concepts and longer chains of reasoning. We bridge this gap by constructing a new, curated dataset called CodeQueries and proposing a neural question-answering methodology over code. We build on a state-of-the-art pretrained model of code to predict answer spans and supporting-fact spans. Given a query and code, only some of the code may be relevant to answering the query. We first experiment under an ideal setting where only the relevant code is given to the model and show that our models do well. We then experiment under three pragmatic considerations: (1) scaling to large-sized code, (2) learning from a limited number of examples, and (3) robustness to minor syntax errors in code. Our results show that while a neural model can be resilient to minor syntax errors in code, increasing code size, the presence of code irrelevant to the query, and a reduced number of training examples all limit model performance. We are releasing our data and models to facilitate future work on the problem of answering semantic queries over code.
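As a rough illustration of the extractive setup described above, the sketch below adds answer-span and supporting-fact-span heads on top of token encodings from a pretrained code encoder. It is a minimal PyTorch sketch with invented names and sizes, not the released CodeQueries model.

```python
import torch
import torch.nn as nn

class SpanHeads(nn.Module):
    """Illustrative heads that score answer and supporting-fact spans over code tokens."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.answer_head = nn.Linear(hidden_size, 2)   # start / end logits for the answer span
        self.support_head = nn.Linear(hidden_size, 2)  # start / end logits for the supporting fact

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden) produced by a pretrained code encoder
        ans_start, ans_end = self.answer_head(token_states).unbind(dim=-1)
        sup_start, sup_end = self.support_head(token_states).unbind(dim=-1)
        return ans_start, ans_end, sup_start, sup_end

# Random tensors stand in for real encoder outputs.
states = torch.randn(2, 128, 768)
ans_s, ans_e, sup_s, sup_e = SpanHeads(768)(states)
print(ans_s.shape)  # torch.Size([2, 128])
```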
The Lottery Ticket Hypothesis (LTH) states that, for a reasonably sized neural network, there exist subnetworks within it that perform no worse than their dense counterpart when trained from the same initialization. This work investigates the relation between model size and the ease of finding these sparse subnetworks. Through experiments we show that, surprisingly, under a limited budget, smaller models benefit more from ticket search (TS).
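Ticket search is commonly realized as iterative magnitude pruning with rewinding to the original initialization. The sketch below shows one such round in PyTorch, with a hypothetical `train_fn` left abstract; it is meant as background rather than the exact protocol evaluated in the paper.

```python
import copy
import torch

def one_ticket_search_round(model, train_fn, sparsity=0.2):
    """One round of iterative magnitude pruning with rewinding to the original init."""
    init_state = copy.deepcopy(model.state_dict())  # remember the initialization
    train_fn(model)                                  # hypothetical training routine

    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                              # prune weight matrices, keep biases dense
            flat = p.detach().abs().flatten()
            k = max(1, int(flat.numel() * (1.0 - sparsity)))
            threshold = flat.topk(k).values.min()
            masks[name] = (p.detach().abs() >= threshold).float()

    # Rewind the surviving weights to their initial values (the candidate "winning ticket").
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.copy_(init_state[name] * masks[name])
    return masks
```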
This paper presents a control algorithm for guiding a two-wheeled mobile robot with unknown inertia to a desired point and orientation using an Adaptive Model Predictive Control (AMPC) framework. The two-wheeled mobile robot is modeled as a knife edge, or a skate, with nonholonomic kinematic constraints, and the dynamic equations are derived using the Lagrangian approach. The inputs at each instant are obtained from Model Predictive Control (MPC) with a set of nominal parameters, which are updated using a recursive least-squares algorithm. The efficacy of the algorithm is demonstrated through numerical simulations at the end of the paper.
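For reference, a textbook recursive least-squares update of the nominal parameters, in its generic form (which may differ in details from the paper's formulation), can be written as follows.

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS estimator: theta <- theta + K * (y - phi^T theta)."""
    def __init__(self, n_params: int, forgetting: float = 0.98):
        self.theta = np.zeros(n_params)      # nominal parameter estimate
        self.P = np.eye(n_params) * 1e3      # covariance of the estimate
        self.lam = forgetting                # forgetting factor, 0 < lambda <= 1

    def update(self, phi: np.ndarray, y: float) -> np.ndarray:
        # phi: regressor vector at the current step, y: measured output
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

rls = RecursiveLeastSquares(n_params=3)
print(rls.update(np.array([1.0, 0.5, -0.2]), y=0.7))
```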
Distributed deep learning frameworks such as Federated Learning (FL) and its variants enable personalized experiences across a wide range of web clients and mobile/IoT devices. However, because of the explosive growth in model parameters (e.g., billion-parameter models), FL-based frameworks are constrained by the computational resources of clients. Split Learning (SL), a recent framework, reduces the client-side computation load by splitting model training between the client and the server. This flexibility is useful in low-compute settings, but it typically comes at the cost of increased bandwidth consumption and can lead to sub-optimal convergence, especially when client data are heterogeneous. In this work, we introduce AdaSplit, which enables SL to scale efficiently to low-resource scenarios by reducing bandwidth consumption and improving performance across heterogeneous clients. To capture and benchmark this multi-dimensional nature of distributed deep learning, we also introduce the C3-Score, a metric for evaluating performance under a resource budget. We validate the effectiveness of AdaSplit under limited resources through extensive experimental comparisons with strong federated and split learning baselines. We also present a sensitivity analysis of the key design choices in AdaSplit, which validates its ability to provide adaptive trade-offs across variable resource budgets.
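For readers unfamiliar with split learning, the PyTorch sketch below shows the plain client/server training step that SL-style methods build on, where only the cut-layer activations and their gradients cross the network. It is a generic illustration with placeholder networks, not AdaSplit's bandwidth-reduction protocol.

```python
import torch
import torch.nn as nn

# Hypothetical split: the client holds the first layers, the server holds the rest.
client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_net = nn.Sequential(nn.Linear(64, 10))
client_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)

def split_training_step(x, y):
    client_opt.zero_grad()
    server_opt.zero_grad()
    # Client-side forward pass up to the cut layer; activations are sent to the server.
    activations = client_net(x)
    smashed = activations.detach().requires_grad_()
    # Server-side forward/backward; only the gradient at the cut layer is sent back.
    loss = nn.functional.cross_entropy(server_net(smashed), y)
    loss.backward()
    activations.backward(smashed.grad)   # continue backpropagation on the client
    client_opt.step()
    server_opt.step()
    return loss.item()

print(split_training_step(torch.randn(8, 32), torch.randint(0, 10, (8,))))
```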
Transformers have seen an unprecedented rise in natural language processing and computer vision tasks. In audio tasks, however, they have been impractical, whether because of the extremely long sequence lengths of raw audio waveforms or when trained on Fourier-based features. In this work, we introduce an architecture, Audiomer, in which we combine 1D residual networks with Performer attention to achieve state-of-the-art performance in keyword spotting with raw audio waveforms, outperforming all previous methods while being computationally cheaper and more parameter-efficient. In addition, our model has practical advantages for speech processing, such as inference on arbitrarily long audio clips owing to the absence of positional encoding. The code is available at https://github.com/the-learning-machines/dautiomer
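A rough sketch of the ingredients named above: 1D residual convolutions over the raw waveform feeding an attention layer. Standard softmax attention stands in for Performer's linear attention here, and all layer sizes are invented, so this is only a shape-level illustration rather than the Audiomer architecture.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """1D residual convolution block operating on waveform-derived features."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

conv = nn.Sequential(nn.Conv1d(1, 32, kernel_size=9, stride=8, padding=4), ResBlock1d(32))
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)

wave = torch.randn(2, 1, 8000)        # (batch, channel, samples) of raw audio
feats = conv(wave).transpose(1, 2)    # (batch, time, channels) for attention
out, _ = attn(feats, feats, feats)    # stand-in for Performer attention
print(out.shape)                      # torch.Size([2, 1000, 32])
```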
Session-based recommendation systems suggest relevant items to users by modeling user behavior and preferences from short-term, anonymous sessions. Existing methods leverage Graph Neural Networks (GNNs) that propagate and aggregate information from neighboring nodes, i.e., local message passing. Such graph-based architectures have representational limits, as a single subgraph is prone to overfitting the sequential dependencies rather than accounting for complex transitions between items in different sessions. We propose a new technique that combines a Transformer with a target-attentive GNN. This allows richer representations to be learned, which translates into empirical performance gains over a vanilla target-attentive GNN. Our experimental results and ablations show that the proposed method is competitive with existing methods on real-world benchmark datasets, improving over graph-based approaches. Code is available at https://github.com/the-learning-machines/sbr
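The sketch below illustrates the target-attention readout idea in isolation, weighting session item representations by their relevance to a candidate item. The function and tensor names are illustrative; this is not the paper's full Transformer-plus-GNN architecture.

```python
import torch

def target_attentive_readout(item_states: torch.Tensor, target_emb: torch.Tensor) -> torch.Tensor:
    """Score a candidate item against a session via a target-aware session vector.

    item_states: (batch, session_len, dim) representations of the session items
    target_emb:  (batch, dim) embedding of the candidate (target) item
    """
    scores = torch.einsum("bld,bd->bl", item_states, target_emb)      # relevance of each item
    weights = torch.softmax(scores, dim=-1)
    session_repr = torch.einsum("bl,bld->bd", weights, item_states)   # target-aware session vector
    return (session_repr * target_emb).sum(-1)                        # one score per candidate

print(target_attentive_readout(torch.randn(4, 10, 64), torch.randn(4, 64)).shape)  # torch.Size([4])
```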
Purpose: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. Methods: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and the surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints at the object level. Results: We validate TAToo both on simulation data, where ground truth motion is available, and on anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for the skull and drill, respectively, with rotation errors below 1°. We further illustrate how TAToo may be used in a surgical navigation setting. Conclusion: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.
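TAToo's iterative, probabilistic optimization is more involved than can be shown here; as a generic building block for rigid 3D motion estimation, the closed-form Kabsch solution for the rotation and translation between corresponding 3D points looks like the sketch below. It is a textbook routine for context, not TAToo's actual method.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid motion (Kabsch): find R, t with dst ~= src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

# Sanity check on synthetic correspondences.
rng = np.random.default_rng(0)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
pts = rng.random((100, 3))
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```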
Reliable and cost-effective counting of people in large indoor spaces is a significant challenge with many applications. An emerging approach is to deploy multiple fisheye cameras mounted overhead to monitor the whole space. However, due to the overlapping fields of view, person re-identification (PRID) is critical for the accuracy of counting. While PRID has been thoroughly researched for traditional rectilinear cameras, few methods have been proposed for fisheye cameras, and their performance is comparatively lower. To close this performance gap, we propose a multi-feature framework for fisheye PRID in which we combine deep-learning, color-based, and location-based features by means of a novel feature fusion. We evaluate the performance of our framework for various feature combinations on FRIDA, a public fisheye PRID dataset. The results demonstrate that our multi-feature approach outperforms recent appearance-based deep-learning methods by almost 18 percentage points and location-based methods by almost 3 percentage points in accuracy.
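A toy sketch of score-level fusion of heterogeneous re-identification cues, combining per-modality distances with fixed weights. The weights, feature dimensions, and dictionary keys are invented, and the actual fusion used in the paper may differ.

```python
import numpy as np

def fused_distance(query, gallery, weights=(0.6, 0.3, 0.1)):
    """Combine distances from deep, color, and location features into one matching score."""
    w_deep, w_color, w_loc = weights
    d_deep = np.linalg.norm(query["deep"] - gallery["deep"])        # appearance embedding distance
    d_color = np.linalg.norm(query["color"] - gallery["color"], 1)  # color-histogram L1 distance
    d_loc = np.linalg.norm(query["loc"] - gallery["loc"])           # distance between floor positions
    return w_deep * d_deep + w_color * d_color + w_loc * d_loc

person = {"deep": np.random.rand(512), "color": np.random.rand(32), "loc": np.array([1.0, 2.0])}
candidate = {"deep": np.random.rand(512), "color": np.random.rand(32), "loc": np.array([1.2, 2.1])}
print(fused_distance(person, candidate))
```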
Recent advances in deep learning research, such as transformers, have bolstered the ability for automated agents to generate creative texts similar to those that a human would write. By default, transformer decoders can only generate new text with respect to previously generated text. The output distribution of candidate tokens at any position is conditioned on previously selected tokens using a self-attention mechanism to emulate the property of autoregression. This is inherently limiting for tasks such as controllable story generation where it may be necessary to condition on future plot events when writing a story. In this work, we propose Future Sight, a method for finetuning a pretrained generative transformer on the task of future conditioning. Transformer decoders are typically pretrained on the task of completing a context, one token at a time, by means of self-attention. Future Sight additionally enables a decoder to attend to an encoded future plot event. This motivates the decoder to expand on the context in a way that logically concludes with the provided future. During inference, the future plot event can be written by a human author to steer the narrative being generated in a certain direction. We evaluate the efficacy of our approach on a story generation task with human evaluators.
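A minimal sketch of the conditioning mechanism described above, letting decoder states attend to an encoded future plot event through cross-attention. The module choices and sizes are illustrative stand-ins, not the Future Sight implementation.

```python
import torch
import torch.nn as nn

dim = 256
future_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2
)
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

story_states = torch.randn(1, 50, dim)    # decoder hidden states for the story written so far
future_tokens = torch.randn(1, 12, dim)   # embedded tokens of the future plot event
future_memory = future_encoder(future_tokens)

# Each story position can condition on the encoded future event.
conditioned, _ = cross_attn(query=story_states, key=future_memory, value=future_memory)
print(conditioned.shape)  # torch.Size([1, 50, 256])
```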
With the growing global deployment of carbon capture and sequestration technology to combat climate change, monitoring and detection of potential CO2 leakage through existing or storage-induced faults are critical to the safe and long-term viability of the technology. Recent work on time-lapse seismic monitoring of CO2 storage has shown promising results in its ability to monitor the growth of the CO2 plume from surface-recorded seismic data. However, due to the low sensitivity of seismic imaging to CO2 concentration, additional developments are required to efficiently interpret the seismic images for leakage. In this work, we introduce a binary classification of time-lapse seismic images to delineate CO2 plumes (leakage) using state-of-the-art deep learning models. Additionally, we localize the leakage region of CO2 plumes by leveraging Class Activation Mapping methods.
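Class Activation Mapping itself is a standard technique: for a CNN that ends in global average pooling, the map for a class is the class-weighted sum of the last convolutional feature maps. The sketch below shows it on a stand-in network, not the model used in the paper.

```python
import torch
import torch.nn as nn

class TinyGapCnn(nn.Module):
    """Small CNN ending in global average pooling, the setting that CAM assumes."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                  # (B, 32, H, W)
        logits = self.fc(fmap.mean(dim=(2, 3)))  # global average pooling, then linear classifier
        return logits, fmap

def class_activation_map(model, x, class_idx):
    _, fmap = model(x)
    weights = model.fc.weight[class_idx]              # per-channel importance for the chosen class
    cam = torch.einsum("c,bchw->bhw", weights, fmap)  # weighted sum of feature maps
    return torch.relu(cam)                            # keep only positive evidence

model = TinyGapCnn()
seismic = torch.randn(1, 1, 64, 64)  # stand-in for a time-lapse seismic image
print(class_activation_map(model, seismic, class_idx=1).shape)  # torch.Size([1, 64, 64])
```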